Hyperspectral imaging has become the latest trend in the field of optical imaging systems. Among its various applications, hyperspectral imaging has been widely used to analyze printed and handwritten documents. This paper proposes an effective technique for estimating the number of distinct yet visually similar inks present in a hyperspectral document image. Our approach is based on unsupervised learning and requires no prior knowledge of the dataset. The algorithm was tested on the iVision HHID dataset and achieved results comparable to the state-of-the-art algorithms in the literature. This work can be effective when used in the early stages of forgery detection in hyperspectral document images.
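As a rough illustration of what such an unsupervised estimate can look like (a generic sketch, not the paper's algorithm), one can cluster per-pixel spectral signatures and pick the ink count that maximizes a cluster-validity score; the `spectra` array, the candidate range, and the use of k-means with the silhouette score are all assumptions.

```python
# Generic sketch: estimate the number of distinct inks by clustering per-pixel
# spectral signatures and scoring candidate cluster counts.
# Assumes `spectra` is an (n_pixels, n_bands) array of ink-pixel reflectances.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def estimate_ink_count(spectra: np.ndarray, max_inks: int = 6) -> int:
    # Normalize each spectrum so clustering reflects spectral shape, not intensity.
    norm = spectra / (np.linalg.norm(spectra, axis=1, keepdims=True) + 1e-12)
    best_k, best_score = 1, -1.0
    for k in range(2, max_inks + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(norm)
        score = silhouette_score(norm, labels)
        if score > best_score:
            best_k, best_score = k, score
    return best_k
```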
Feature extraction is an important task in graph analysis. These feature vectors, called graph descriptors, are used in downstream vector-space-based graph analysis models. This idea has proved fruitful in the past, with spectral-based graph descriptors providing state-of-the-art classification accuracy. However, known algorithms for computing meaningful descriptors do not scale to large graphs because: (1) they require storing the entire graph in memory, and (2) the end user has no control over the algorithm's runtime. In this paper, we propose streaming algorithms to approximately compute three different graph descriptors that capture the essential structure of graphs. Operating on edge streams allows us to avoid storing the entire graph in memory, and controlling the sample size lets us keep the runtime of our algorithms within desired bounds. We demonstrate the efficacy of the proposed descriptors by analyzing the approximation error and classification accuracy. Our scalable algorithms compute descriptors of graphs with millions of edges within minutes. Moreover, these descriptors yield predictive accuracy comparable to the state-of-the-art methods while requiring only 25% of the memory.
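To make the streaming idea concrete, the sketch below (illustrative only; the paper's three descriptors are different and more elaborate) reservoir-samples a user-bounded number of edges from the stream and computes a simple structural descriptor, a normalized degree histogram, from the sample; `sample_size` is what bounds both memory and runtime.

```python
# Illustrative sketch: reservoir-sample a fixed number of edges from an edge
# stream, then compute a simple structural descriptor from the sample.
import random
from collections import Counter

def sample_edges(edge_stream, sample_size, seed=0):
    rng = random.Random(seed)
    reservoir = []
    for i, edge in enumerate(edge_stream):
        if i < sample_size:
            reservoir.append(edge)
        else:
            j = rng.randint(0, i)
            if j < sample_size:
                reservoir[j] = edge   # each edge kept with equal probability
    return reservoir

def degree_histogram_descriptor(edges, num_bins=16):
    degrees = Counter()
    for u, v in edges:
        degrees[u] += 1
        degrees[v] += 1
    hist = [0] * num_bins
    for d in degrees.values():
        hist[min(d, num_bins) - 1] += 1   # cap large degrees in the last bin
    total = sum(hist) or 1
    return [h / total for h in hist]

# Usage: descriptor = degree_histogram_descriptor(sample_edges(stream, 10_000))
```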
Bipedal robots have received much attention because of the variety of motion maneuvers they can produce and their many applications in areas including rehabilitation. One of these motion maneuvers is walking. In this study, we present a framework for the trajectory optimization of a 5-link planar biped robot using hybrid optimization. Walking is modeled with two phases: a single-stance (support) phase and a collision phase. The dynamic equations of the robot in each phase are derived by the Lagrange method. The robot's heel strike with the ground is assumed to be fully plastic. The gait is optimized with a method called hybrid optimization. The objective function is the integral of squared torque along the trajectory, and various constraints, such as zero dynamics, are satisfied without any approximation. Furthermore, the framework introduces a constraint called impact invariance, which ensures the periodicity of the time-varying trajectories, while additional constraints promote smoother, more human-like movement.
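Schematically, and with notation assumed rather than taken from the paper, the gait optimization can be written as follows, where the last constraint is the impact-invariance (periodicity) condition:

```latex
% Schematic statement of the gait optimization (notation assumed, not taken
% verbatim from the paper): q are the joint angles, \tau the joint torques,
% T the step duration, and \Delta the plastic heel-strike (impact) map.
\begin{align*}
\min_{q(\cdot),\,\tau(\cdot),\,T} \quad & J = \int_{0}^{T} \tau(t)^{\top}\tau(t)\,\mathrm{d}t \\
\text{s.t.} \quad & D(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) = B\tau
    && \text{(single-stance dynamics)} \\
& \bigl(q(0^{+}),\dot{q}(0^{+})\bigr) = \Delta\bigl(q(T^{-}),\dot{q}(T^{-})\bigr)
    && \text{(impact invariance / periodicity)}
\end{align*}
```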
The importance of humanoid robots in today's world is undeniable. One of their most important features is the ability to maneuver in environments such as stairs that other robots cannot easily traverse, so a suitable algorithm for generating the climbing path of a bipedal robot is essential. In this paper, an optimization-based method is presented for generating an optimal stair-climbing path for under-actuated bipedal robots without an ankle actuator. The generated paths are based on the zero and non-zero dynamics of the problem; because the zero-dynamics constraint is satisfied, the paths can be tracked, i.e., the problem is dynamically feasible. The optimization method is gradient-based and requires a modest number of function evaluations. The method can also be used for descending stairs.
Finding and localizing conceptual changes between two images of the same scene captured at different times, in terms of the presence or removal of objects, is of great significance in special-care applications. This is mainly because the addition or removal of important objects can be harmful in some environments. As a result, there is a need for a system that locates these differences using machine vision. The most important challenges of this problem are changes in lighting conditions and the presence of shadows in the scene, so proposed methods must be robust to them. In this article, a method based on deep convolutional neural networks with transfer learning is introduced, trained with an intelligent data-synthesis process. The method is evaluated on a dataset provided for this purpose, and it is shown to be more efficient than other methods and usable in a variety of real industrial environments.
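A minimal sketch of one way such a transfer-learning change detector can be wired up (the backbone choice, layer sizes, and head below are illustrative assumptions, not the authors' network):

```python
# Illustrative sketch: a frozen pretrained backbone embeds both images, and a
# small trainable head turns the feature difference into a per-pixel change map.
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

class ChangeDetector(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=ResNet18_Weights.DEFAULT)
        # Keep the convolutional trunk only; freeze it for transfer learning.
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        for p in self.encoder.parameters():
            p.requires_grad = False
        self.head = nn.Sequential(
            nn.Conv2d(512, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, kernel_size=1),  # change logits per feature cell
        )

    def forward(self, img_t0, img_t1):
        f0, f1 = self.encoder(img_t0), self.encoder(img_t1)
        diff = torch.abs(f0 - f1)  # compare in feature space, not raw pixels
        logits = self.head(diff)
        # Upsample the coarse logits back to the input resolution.
        return nn.functional.interpolate(
            logits, size=img_t0.shape[-2:], mode="bilinear", align_corners=False
        )
```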
Are extralinguistic signals such as image pixels crucial for inducing constituency grammars? While past work has shown substantial gains from multimodal cues, we investigate whether such gains persist in the presence of rich information from large language models (LLMs). We find that our approach, LLM-based C-PCFG (LC-PCFG), outperforms previous multimodal methods on the task of unsupervised constituency parsing, achieving state-of-the-art performance on a variety of datasets. Moreover, LC-PCFG yields an over 50% reduction in parameter count and training-time speedups of 1.7x for image-aided models and more than 5x for video-aided models. These results challenge the notion that extralinguistic signals such as image pixels are needed for unsupervised grammar induction, and point to the need for better text-only baselines when evaluating the need for multimodality in this task.
This paper proposes a perception and path planning pipeline for autonomous racing on an unknown bounded course. The pipeline was initially created for the 2021 evGrandPrix autonomous division and was further improved for the 2022 event, both of which resulted in first-place finishes. Using a simple LiDAR-based perception pipeline feeding into an occupancy-grid-based expansion algorithm, we determine a goal point to drive toward. This pipeline reliably and consistently completed laps around a cone-defined track, averaging 6.85 m/s over a distance of 434.2 meters for a total lap time of 63.4 seconds.
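For illustration, an occupancy-grid expansion step of this kind might look like the following sketch (the grid size, resolution, and farthest-reachable-cell goal rule are assumptions, not the team's exact implementation):

```python
# Illustrative sketch: mark cells containing LiDAR returns (cones) as occupied,
# flood-fill the free space outward from the robot's cell, and take the farthest
# reachable free cell as the goal point.
from collections import deque
import numpy as np

def pick_goal(points_xy, grid_size=80, resolution=0.25):
    grid = np.zeros((grid_size, grid_size), dtype=bool)  # True = occupied
    origin = grid_size // 2                               # robot at grid center
    for x, y in points_xy:
        i, j = int(origin + x / resolution), int(origin + y / resolution)
        if 0 <= i < grid_size and 0 <= j < grid_size:
            grid[i, j] = True

    # Breadth-first expansion through free cells, recording hop distance.
    dist = np.full((grid_size, grid_size), -1, dtype=int)
    dist[origin, origin] = 0
    queue = deque([(origin, origin)])
    while queue:
        i, j = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < grid_size and 0 <= nj < grid_size
                    and not grid[ni, nj] and dist[ni, nj] < 0):
                dist[ni, nj] = dist[i, j] + 1
                queue.append((ni, nj))

    gi, gj = np.unravel_index(np.argmax(dist), dist.shape)
    return (gi - origin) * resolution, (gj - origin) * resolution  # goal in meters
```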
Convolutional Neural Networks (CNNs) have shown promising results for displacement estimation in UltraSound Elastography (USE). Many modifications have been proposed to improve CNN displacement estimation for USE in the axial direction. However, lateral strain, which is essential in several downstream tasks such as the inverse problem of elasticity imaging, remains a challenge. Lateral strain estimation is complicated because the motion and the sampling frequency in this direction are substantially lower than in the axial direction, and because this direction lacks a carrier signal. In computer vision applications, axial and lateral motions are independent. In contrast, the tissue motion pattern in USE is governed by laws of physics that link the axial and lateral displacements. In this paper, inspired by Hooke's law, we first propose Physically Inspired ConsTraint for Unsupervised Regularized Elastography (PICTURE), in which we impose a constraint on the Effective Poisson's ratio (EPR) to improve lateral strain estimation. We then propose self-supervised PICTURE (sPICTURE) to further enhance strain image estimation. Extensive experiments on simulation, experimental phantom and in vivo data demonstrate that the proposed methods estimate accurate axial and lateral strain maps.
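Schematically, such an EPR constraint can be expressed as a penalty that keeps the ratio of lateral to axial strain within a physiologically plausible range (the exact penalty used in the paper may differ; the symbols below are assumed notation):

```latex
% Schematic form of the Poisson's-ratio constraint (illustrative; the exact
% penalty in the paper may differ). \epsilon_a and \epsilon_l denote the axial
% and lateral strains, and [\nu_{\min}, \nu_{\max}] a plausible range for
% nearly incompressible soft tissue.
\nu_{\mathrm{eff}} = -\frac{\epsilon_l}{\epsilon_a}, \qquad
\mathcal{L}_{\mathrm{EPR}} =
  \max\bigl(0,\; \nu_{\min} - \nu_{\mathrm{eff}}\bigr)
  + \max\bigl(0,\; \nu_{\mathrm{eff}} - \nu_{\max}\bigr)
```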
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once, which was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
We present Masked Audio-Video Learners (MAViL) to train audio-visual representations. Our approach learns with three complementary forms of self-supervision: (1) reconstruction of masked audio and video input data, (2) intra- and inter-modal contrastive learning with masking, and (3) self-training by reconstructing joint audio-video contextualized features learned from the first two objectives. Pre-training with MAViL not only enables the model to perform well in audio-visual classification and retrieval tasks but also improves representations of each modality in isolation, without using information from the other modality for fine-tuning or inference. Empirically, MAViL sets a new state-of-the-art on AudioSet (53.1 mAP) and VGGSound (67.1% accuracy). For the first time, a self-supervised audio-visual model outperforms ones that use external supervision on these benchmarks. Code will be available soon.
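As a toy sketch of how the three objectives might be combined (the encoders, decoders, masking, contrastive temperature, and equal loss weights below are simplified placeholders, not the authors' architecture):

```python
# Toy sketch of combining masked reconstruction, inter-modal contrastive
# learning, and self-training against a teacher's contextualized features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMAViL(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.audio_enc = nn.Linear(128, dim)   # stands in for an audio encoder
        self.video_enc = nn.Linear(768, dim)   # stands in for a video encoder
        self.audio_dec = nn.Linear(dim, 128)
        self.video_dec = nn.Linear(dim, 768)

    def forward(self, audio_patches, video_patches, audio_mask, video_mask,
                teacher_feats=None):
        za = self.audio_enc(audio_patches)     # (B, Na, dim)
        zv = self.video_enc(video_patches)     # (B, Nv, dim)
        # (1) reconstruct the masked audio/video patches
        recon = (F.mse_loss(self.audio_dec(za)[audio_mask], audio_patches[audio_mask])
                 + F.mse_loss(self.video_dec(zv)[video_mask], video_patches[video_mask]))
        # (2) inter-modal contrastive loss between pooled clip-level features
        a = F.normalize(za.mean(1), dim=-1)
        v = F.normalize(zv.mean(1), dim=-1)
        logits = a @ v.t() / 0.07
        labels = torch.arange(a.size(0), device=a.device)
        contrast = (F.cross_entropy(logits, labels)
                    + F.cross_entropy(logits.t(), labels)) / 2
        # (3) self-training: regress joint contextualized features from a teacher
        joint = torch.cat([za, zv], dim=1)
        distill = (F.smooth_l1_loss(joint, teacher_feats)
                   if teacher_feats is not None else 0.0)
        return recon + contrast + distill
```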